Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people; visit the Get Involved section on the wiki to see how you can make SoylentNews better.

Do you put ketchup on the hot dog you are going to consume?

  • Yes, always
  • No, never
  • Only when it would be socially awkward to refuse
  • Not when I'm in Chicago
  • Especially when I'm in Chicago
  • I don't eat hot dogs
  • What is this "hot dog" of which you speak?
  • It's spelled "catsup" you insensitive clod!

[ Results | Polls ]
Comments: 70 | Votes: 199

posted by hubie on Tuesday February 03, @07:04PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

NASA is setting up an anomaly review board to look into the fate of its Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft, which was last heard from on December 6.

Attempts to make contact with the Mars orbiter are ongoing. The final fragments of data indicated that the spacecraft was tumbling and had possibly changed trajectory. The MAVEN team is analyzing snippets of data recovered from a December 6 radio science campaign to develop a timeline of possible events and likely root causes of the issue.

James Godfrey, retired Spacecraft Operations Manager for ESA's Mars Express, pondered what might have happened to MAVEN in a message to The Register.

"The fact that it appears to be rotating in an unexpected manner (tumbling?) and might have experienced an orbital change (I guess from inconsistent Doppler data) does suggest an energetic event.

"It's unlikely that anything has hit it – not much space debris at Mars. So more likely something onboard."

If the spacecraft had entered a normal safe mode, controllers should have been able to communicate with it. "So whatever has happened, it hasn't been able to reach safe mode for some unknown reason," Godfrey speculated.

"So problems that could result in loss of attitude, possible orbit change, would suggest problems affecting GNC [Guidance, Navigation, and Control]. Could be an onboard computer failure, stuck valve, run out of fuel etc. Possibly a problem with the reaction wheels? In any case, something that caused the thrusters to fire in an unbalanced fashion from which the spacecraft was unable to recover autonomously."

All possibilities are bad news for MAVEN, both as a mission and a telecommunications relay for NASA's Mars rovers, Curiosity and Perseverance. The spacecraft entered Mars orbit on September 22, 2014, with a two-year planned mission. It has since endured for well over a decade, gathering data on the planet's atmosphere.

Attempts to contact the probe were further complicated by the solar conjunction, when the Sun lies between the Earth and Mars, blocking communication.

Godfrey noted this was "a more challenging conjunction than the run-of-the-mill" and that "the Sun being very active at the moment won't help."

All of which makes MAVEN's recovery at this stage improbable. The thermal and power status of the spacecraft is not known, nor is its location.

The assembly of a formal anomaly review board is an indicator that, while NASA has yet to throw in the towel, things are not looking good for MAVEN, and managers want to understand what happened.


Original Submission

posted by hubie on Tuesday February 03, @02:17PM   Printer-friendly
from the year-of-the-French-desktop dept.

Arthur T Knackerbracket has processed the following story:

As geopolitical tensions abound, France is going all in on its strategy to stop using foreign software vendors, announcing plans to move departments to homegrown Visio.

France's David Amiel, minister for the civil service and state reform, is expected to issue a mandate to all government departments in the coming days to cease using US videoconferencing products such as Zoom, Microsoft Teams, and Google Meet in favour of the French-developed Visio. The government says Visio will be in use across all departments by 2027, according to reporting from Euronews.

France has long telegraphed its determination to gain control over its digital infrastructure and its strategy of favouring homegrown vendors over their US counterparts. All this comes as digital sovereignty becomes a burning issue in Europe.

Back in 2020, Brussels-based GAIA-X was formed to align with the EU’s Digital Strategy to enhance Europe’s competitiveness in the digital economy while safeguarding data and digital infrastructure from external influence. The Gaia-X European Association for Data and Cloud AISBL is composed of members from industry, research organisations, and government bodies. GAIA-X is backed by European governments, particularly Germany and France, according to the OECD.

As for France, this latest move is designed, says Amiel, to “end the use of non-European solutions and guarantee the security and confidentiality of public electronic communications by relying on a powerful and sovereign tool”.

Visio is part of France’s Suite Numérique, a digital suite of sovereign tools for civil servants, and is hosted on another French company’s sovereign cloud infrastructure, Outscale (a Dassault Systèmes subsidiary). French start-up Pyannote supplies the AI transcription and diary tools. Just last summer civil servants were ordered off WhatsApp and Telegram and told to use Tchap, a messaging service created specifically for them.

The French Government says it could save up to €1m a year in licensing fees through the switch to Visio, but that appears to be a side bonus, as the real goal is to cut its reliance on foreign providers for its critical digital infrastructure.

“This strategy highlights France’s commitment to digital sovereignty amid rising geopolitical tensions and fears of foreign surveillance or service disruptions,” Amiel said.


Original Submission

posted by janrinok on Tuesday February 03, @09:29AM   Printer-friendly

The Wall Street Journal reports that NVIDIA's plan to invest $100 billion in OpenAI may fall through.

According to "people familiar with the matter," Jensen Huang has been privately downplaying the $100 billion / 10 gigawatt deal that was announced with OpenAI this past September. According to the WSJ's sources, talks between the two companies never got past "early stages." The article also claims that Huang has asserted, in private, that the September deal was non-binding. This is corroborated by a November filing in which Nvidia admitted that there was "no assurance" of a "definitive agreement" with OpenAI. (CNBC source: https://www.cnbc.com/2025/11/19/nvidia-says-no-assurance-of-deal-with-openai-after-100-billion-pact.html )

Furthermore, on Saturday, Huang told reporters in Taipei that, while Nvidia will invest "a great deal of money" in OpenAI's latest funding round, it would be "nothing like" $100 billion. (Bloomberg link: https://www.bloomberg.com/news/articles/2026-01-31/nvidia-to-join-openai-s-current-funding-round-huang-says ).

However, Nvidia CEO Jensen Huang said Saturday that a recent report of friction between his company and OpenAI was "nonsense."

This is probably a good reminder to be skeptical of media reports that size a deal in dollars or gigawatts. Was a contract actually signed, or was it just an announcement?


Original Submission

posted by janrinok on Tuesday February 03, @04:43AM   Printer-friendly
from the self-extinguished dept.

You can determine if you're at risk and take action today:

If you think your Windows computer is safe from prying eyes, think again. A new report reveals that Microsoft has the encryption keys to your hard drive, and it can even give them out to law enforcement, including the FBI. Here's what you need to know and what you can do to stop it from happening to you.

In a stunning breach of personal privacy and security, Microsoft admitted in January that it provided the FBI with the BitLocker recovery keys to three different Windows PCs that were linked to suspected COVID unemployment assistance fraud in Guam. With these keys, the FBI was able to access the files on those devices as part of its investigation.

[...] The Redmond tech giant received its first request from a government official during the Obama administration in 2013. Although the engineer who spoke with the official reportedly declined to build a back door into Windows that would give the government unbridled access to user files, Microsoft still admits to turning over BitLocker recovery keys to law enforcement as recently as 2025. According to the report, Microsoft receives approximately 20 access requests from the FBI per year.

[...] You are not at risk if ...

  • You use a Windows PC without a Microsoft account. (You haven't logged into the system with your Outlook email address.)
  • You use a Windows PC with a Microsoft account but you chose a local recovery key backup option at activation.
  • You disabled BitLocker encryption when you set up your PC.

You are at risk if ...

  • You use a Windows PC with a Microsoft Outlook account and you chose to back up your BitLocker recovery key to your account.
  • Your PC is a work machine that's managed by your employer.

For those at risk, Microsoft promises that it only hands over encryption keys in response to lawful government requests. That said, if Microsoft can access your encryption keys, what's stopping a hacker from getting them? The problem with storing security keys on cloud servers is that anyone with the right password, login information, or exploit can reach them.
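If you want to check where a particular machine stands, here is a minimal Python sketch (our illustration, not a Microsoft tool) that shells out to Windows' built-in manage-bde utility to list the BitLocker key protectors on C:. It only tells you whether a recovery password exists on the volume; whether a copy is stored in your Microsoft account is something you have to check on your account's recovery key page. Run it from an elevated (administrator) prompt.

```python
# check_bitlocker.py - list BitLocker key protectors on C:
# Illustrative sketch only; requires Windows and an elevated prompt.
import subprocess
import sys

def list_protectors(volume="C:"):
    # manage-bde is Windows' built-in BitLocker command-line tool.
    try:
        result = subprocess.run(
            ["manage-bde", "-protectors", "-get", volume],
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        sys.exit("manage-bde not found - this sketch only works on Windows.")
    if result.returncode != 0:
        sys.exit("manage-bde failed (run from an elevated prompt?):\n" + result.stderr)
    return result.stdout

if __name__ == "__main__":
    output = list_protectors()
    print(output)
    if "Numerical Password" in output:
        print("A recovery password protector exists on this volume.")
        print("If you chose to back it up to a Microsoft account, Microsoft holds a copy.")
    else:
        print("No recovery password protector found on this volume.")
```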

Previously: Microsoft Gave FBI a Set of BitLocker Encryption Keys to Unlock Suspects' Laptops

Related: Over Half a Million Windows Users are Switching to Linux


Original Submission

posted by janrinok on Monday February 02, @11:58PM   Printer-friendly
from the TIMMEH! dept.

The "battle for the soul" of the internet is happening now, according to Tim Berners-Lee, and it's not too late to fix the web.

Founder of the world wide web says commercialisation means the net has been 'optimised for nastiness', but collaboration and compassion can prevail

Berners-Lee traces the first corruption of the web to the commercialisation of the domain name system, which he believes would have served web users better had it been managed by a nonprofit in the public interest. Instead, he says, in the 1990s the .com space was pounced on by "charlatans".

"It's only a small part of the whole internet ... but the problem is that people spend a lot of time on [social media websites] because they're addictive," he says.

So money is the root of all the evil then ... Or in their case perhaps it's how they make their money. Or did it just turbocharge greed?

Compounding the problem is monopolisation. Facebook and Google's dominance is bad for innovation and bad for the web.

I would like to see a Cern for AI, where all the top scientists come together and see whether they can make a super intelligence.

Not sure what it pays to work at CERN, but I doubt it's Google and FaceMeta money. So unless all the scientists are supposed to be altruists ...

Not sure I share his optimism. It has become quite soulless, commercial/corporate, Big Brother-ish, and, well, somewhat "evil". Perhaps it's just time to slay the beast, stake it once and for all, and build something new and better on its festering carcass. Too far gone to save. Time to put it out of its misery?

https://www.theguardian.com/technology/2026/jan/29/internet-inventor-tim-berners-lee-interview-battle-soul-web


Original Submission

posted by janrinok on Monday February 02, @07:12PM   Printer-friendly
from the as-the-world-turns dept.

https://reactos.org/blogs/30yrs-of-ros/

Today marks 30 years since the first commit to the ReactOS source tree.
[...]
ReactOS started from the ashes of the FreeWin95 project, which aimed to provide a free and open-source clone of Windows 95. FreeWin95 suffered from analysis paralysis, attempting to plan the whole system before writing any code. Tired of the lack of progress on the project, Jason Filby took the reins as project coordinator and led a new effort targeting Windows NT. The project was renamed to "ReactOS" as it was a reaction to Microsoft's monopolistic position in home computer operating systems.
[...]
While writing this article, I reached out to Eric Kohl. He developed the original storage driver stack for ReactOS [...]

"I think I found ReactOS while searching for example code for my contributions to the WINE project. I subscribed to the mailing list and followed the discussions for a few days. The developers were discussing the future of shell.exe, a little command line interpreter that could only change drives and directories and execute programs. A few days [later] I had started to convert the FreeDOS command.com into a Win32 console application, because I wanted to extend it to make it 4DOS compatible. 4DOS was a very powerful command line interpreter. On December 4th, 1998 I introduced myself and suggested to use my converted FreeDOS command.com as the future ReactOS cmd.exe. I had a little conversation with Jason Filby and Rex Joliff, the CVS repository maintainer. I sent my cmd.exe code to Rex and he applied it to the repository. After applying a few more cmd-related patches over the next weeks, Rex asked me whether I would like to have write-access to the repository. I accepted the offer...
[...]
There was always an open and friendly atmosphere. It was and still is always nice to talk to other developers. No fights, no wars, like in some other projects."

[...]
Public interest grew as ReactOS matured. In October 2005, Jason Filby stepped down as project coordinator, and Steven Edwards was voted to be the next project coordinator.
[...]
Steven Edwards strengthened the project's intellectual property policy and the project made the difficult decision to audit the existing source code and temporarily freeze contributions.
[...]
Following challenges with the audit, Steven Edwards stepped down as project coordinator and Aleksey Bragin assumed the role by August 2006.

Despite the challenges during this time, ReactOS 0.3.x continued to build upon ReactOS's legacy. ReactOS 0.3.0 was released on August 28th, 2006.
[...]
ReactOS 0.4.0 was released on February 16th, 2016. It introduced a new graphical shell that utilized more Windows features and was architecturally more similar to Windows Explorer. ReactOS 0.4.0 also introduced support for kernel debugging using WinDbg when compiled with MSVC. Being able to use standard Windows tools for kernel debugging has helped us progress considerably. The 0.4.x series continued to receive incremental updates every few months up until versions 0.4.14 and 0.4.15, each of which had years of development behind it. Today, the x86_64 port of ReactOS is similarly functional to its x86 counterpart, but with no WoW64 subsystem to run x86 apps, its usability is limited.
[...]
Behind the scenes there are several out-of-tree projects in development. Some of these exciting projects include a new build environment for developers (RosBE), a new NTFS driver, a new ATA driver, multi-processor (SMP) support, support for class 3 UEFI systems, kernel and usermode address space layout randomization (ASLR), and support for modern GPU drivers built on WDDM.

The future of ReactOS will be written by the people who believe in the mission and are willing to help carry it forward.

If you believe in running "your favorite Windows apps and drivers in an open-source environment you can trust", you can help make that a reality by making a financial contribution, opening a pull request on GitHub, or testing and filing bug reports. Even small contributions can help a lot!

Previously on SoylentNews:
ReactOS 0.4.15 Released - 20250326
Watch: Mac OS X 10.4 Running in Windows Alternative ReactOS via PearPC Emulator - 20180510
Alternatives to Win32...Win32 of course! ReactOS still making progress.... - 20160828
Release of ReactOS 0.4 Brings Open Source Windows Closer to Reality - 20160217
Ask Soylent: Can We Turn ReactOS into a Viable Alternative to Windows 10? - 20151021
NTFS Now Supported in ReactOS LiveCD - 20141106


Original Submission

posted by janrinok on Monday February 02, @02:28PM   Printer-friendly

Every time we speak, we're improvising:

"Humans possess a remarkable ability to talk about almost anything, sometimes putting words together into never-before-spoken or -written sentences," said Morten H. Christiansen, the William R. Kenan, Jr. Professor of Psychology in the College of Arts and Sciences.

We can improvise new sentences so readily, language scientists believe, because we have acquired mental representations of the patterns of language that allow us to combine words into sentences. The nature of those patterns and how they work, however, remains a puzzle in cognitive science, Christiansen said.

[...] For decades, scientists have believed we rely on a complex mental grammar to build sentences that have hierarchically organized structure – like a branching tree. But Christiansen and Nielsen suggest that our mental representations might be more like snapping together pre-assembled LEGO pieces (such as a door frame or a wheel set) into a complete model. Instead of intricate hierarchies, they propose, we use small, linear chunks of word classes like nouns and verbs – including short sequences that can't be formed by way of grammar, such as "in the middle of the" or "wondered if you."

[...] The prevailing theory since at least the 1950s is based on hierarchical, tree-like mental representations, setting humans apart from other animals, Christiansen said. In this view, words and phrases combine according to the principles of grammar into larger units called constituents. For example, in the sentence "She ate the cake," "the" and "cake" combine into a noun phrase "the cake", which then combines with "ate" into the verb phrase "ate the cake," and finally with "she" to make the sentence.

"But not all sequences of words form constituents," Christiansen and Nielsen wrote in a summary of their paper. "In fact, the most common three- or four-word sequences in language are often nonconstituents, such as 'can I have a' or 'it was in the.'"

Because they don't conform to grammar, nonconstituent sequences have been overlooked. But they do play a role in a speaker's knowledge of their language, the researchers found.
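As a toy illustration of how common such chunks are (our sketch, using made-up sample sentences rather than the researchers' corpus or methods), counting the most frequent three-word sequences in even a tiny sample quickly surfaces nonconstituent strings like "can i have" and "in the middle":

```python
# Toy sketch: count frequent three-word sequences ("chunks") in a tiny sample.
# Illustration only - not the corpus or analysis used in the study.
from collections import Counter

sentences = [
    "can i have a look at it",
    "it was in the middle of the room",
    "she ate the cake in the middle of the night",
    "can i have a piece of the cake",
    "he wondered if you were in the middle of something",
]

def trigrams(sentence):
    words = sentence.split()
    return zip(words, words[1:], words[2:])

counts = Counter(t for s in sentences for t in trigrams(s))

for chunk, n in counts.most_common(5):
    print(" ".join(chunk), n)
# Frequent chunks such as "in the middle" and "can i have" are not
# grammatical constituents, yet they recur constantly in ordinary speech.
```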

In experiments (an eye-tracking study and an analysis of phone conversations), they discovered that linear sequences of word classes can be "primed," meaning that when we hear or read them once, we process them faster the next time. That's compelling evidence they're part of our mental representation of language, Christiansen said. In other words, they're a key part of our knowledge of language that goes beyond the rules of grammar.

"I think the main contribution is showing that traditional rules of grammar cannot capture all of the mental representations of language structure," Nielsen said.

"It might even be possible to account for how we use language in general with flatter structure," Christiansen said. "Importantly, if you don't need the more complex machinery of hierarchical syntax, then this could mean that the gulf between human language and other animal communication systems is much smaller than previously thought."

Journal Reference: Nielsen, Y.A., Christiansen, M.H. Evidence for the representation of non-hierarchical structures in language. Nat Hum Behav (2026). https://doi.org/10.1038/s41562-025-02387-z


Original Submission

posted by janrinok on Monday February 02, @09:42AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The European Commission has started proceedings to ensure Google complies with the Digital Markets Act (DMA) in certain ways. Specifically, the European Union’s executive arm has told Google to grant third-party AI services the same level of access to Android that Gemini has. "The aim is to ensure that third-party providers have an equal opportunity to innovate and compete in the rapidly evolving AI landscape on smart mobile devices," the Commission said in a statement.

The company will also have to hand over "anonymized ranking, query, click and view data held by Google Search" to rival search engines. The Commission says this will help competing companies to optimize their services and offer more viable alternatives to Google Search.

"Today’s proceedings under the Digital Markets Act will provide guidance to Google to ensure that third-party online search engines and AI providers enjoy the same access to search data and Android operating system as Google's own services, like Google Search or Gemini," said Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security and democracy. "Our goal is to keep the AI market open, unlock competition on the merits and promote innovation, to the benefit of consumers and businesses."

The Commission plans to wrap up these proceedings in the next six months, effectively handing Google a deadline to make all of this happen. If the company doesn't do so to the Commission's satisfaction, it may face a formal investigation and penalties down the line. The Commission can impose fines of up to 10 percent of a company's global annual revenue for a DMA violation.

Google was already in hot water with the EU for allegedly favoring its own services — such as travel, finance and shopping — over those from rivals and stopping Google Play app developers from easily directing consumers to alternative, cheaper ways to pay for digital goods and services. The bloc charged Google with DMA violations related to those issues last March.

In November, the EU opened an investigation into Google's alleged demotion of commercial content on news websites in search results. The following month, it commenced a probe into Google's AI practices, including whether the company used online publishers' material for AI Overviews and AI Mode without "appropriate compensation" or offering the ability to opt out.


Original Submission

posted by hubie on Monday February 02, @04:57AM   Printer-friendly

Scientists baffled at mysterious ancient creature that doesn't fit on the tree of life as we know it:

A bizarre ancient life-form, considered to be the first giant organism to live on land, may belong to a totally unknown branch of the tree of life, scientists say.

These organisms were massive, with some species growing up to 26 feet (8 meters) tall and 3 feet (1 m) wide. Named Prototaxites, they lived around 420 million to 375 million years ago during the Devonian period and resembled branchless, cylindrical tree trunks.

Since the first Prototaxites fossil was discovered in 1843, scientists haven't been sure whether they were a plant, fungus or even a type of algae. However, chemical analyses of Prototaxites fossils in 2007 suggested they were likely a giant ancient fungus.

Now, according to a study published Wednesday (Jan. 21) in the journal Science Advances, Prototaxites might not have been a humongous fungus after all — rather, it may have been an entirely different and previously unknown — and now extinct — life-form.

"They are life, but not as we now know it, displaying anatomical and chemical characteristics distinct from fungal or plant life, and therefore belonging to an entirely extinct evolutionary branch of life," study lead co-author Sandy Hetherington, a research associate at the National Museums Scotland and senior lecturer from the School of Biological Sciences at the University of Edinburgh, said in a statement.

All life on Earth is classified within three domains — bacteria, archaea and eukarya — with eukarya containing all multicellular organisms within the four kingdoms of fungi, animals, plants and protists. Bacteria and archaea contain only single-celled organisms.

[...] However, according to this new research, Prototaxites may actually have been part of a totally different kingdom of life, separate from fungi, plants, animals and protists.

[...] Upon examining the internal structure of the fossilized Prototaxites, the researchers found that its interior was made up of a series of tubes, similar to those within a fungus. But these tubes branched off and reconnected in ways very unlike those seen in modern fungi.

"We report that fossils of Prototaxites taiti from the 407-million-year-old Rhynie chert were chemically distinct from contemporaneous Fungi and structurally distinct from all known Fungi," the researchers wrote in the study. "This finding casts doubt upon the fungal affinity of Prototaxites, instead suggesting that this enigmatic organism is best assigned to an entirely extinct eukaryotic lineage."

Kevin Boyce, a professor at Stanford University who was not involved in this new research, led the 2007 study that posited Prototaxites was a giant fungus. However, he told New Scientist that he agreed with the study's findings.

"Given the phylogenetic information we have now, there is no good place to put Prototaxites in the fungal phylogeny," Boyce said. "So maybe it is a fungus, but whether a fungus or something else entirely, it represents a novel experiment with complex multicellularity that is now extinct and does not share a multicellular common ancestor with anything alive today."

Journal Reference: Corentin C. Loron, Laura M. Cooper, Seán F. Jordan, et al., Prototaxites fossils are structurally and chemically distinct from extinct and extant Fungi, Science Advances, 21 Jan 2026, Vol 12, Issue 4 DOI: 10.1126/sciadv.aec6277


Original Submission

posted by hubie on Monday February 02, @12:11AM   Printer-friendly
from the how-much-will-they-charge-for-the-RAM-it-comes-with? dept.

Arthur T Knackerbracket has processed the following story:

Nvidia's big consumer chips for PCs, the Arm-based N1 and N1X, could finally be about to arrive if a new rumor is correct.

A report from DigiTimes (hat tip to VideoCardz) claims that laptops with Nvidia's N1X chip inside will be launching in the first quarter of 2026. So, within the next two months.

These will target the consumer market, and three other variants will be on sale in Q2, we're told. Presumably, that includes the base N1 chip, which is less powerful but still intended for producing 'high-end AI computing platforms' – the N1X is the more performant CPU, which will be aimed at notebooks for professionals, the report observes.

There's still some confusion around the naming and where exactly the N1 and N1X will fit into the CPU landscape, with some guessing that the N1 will be a desktop chip, and the N1X a mobile (laptop) chip. However, DigiTimes makes it clear that both the N1 and N1X will appear in laptops (add your own seasoning, naturally). That doesn't mean that there couldn't be a desktop variant of one of these chips as well, though, and perhaps that's still planned.

Following the N1 series, the next-gen N2 silicon will take the baton for Nvidia in the third quarter of 2027, the report claims.

Obviously, be skeptical about that timeframe in particular, because even if Nvidia has plans for these N2 chips, this schedule may end up going awry (what with the silicon still being relatively early in development).

The rumor comes from supply chain sources, we're informed, and the delay of the N1 series – which was supposed to arrive late in 2025 as per the original speculation about Nvidia's Arm CPU – is due to Team Green fine-tuning these chips, and "Microsoft OS timelines", the report states.

The latter presumably refers to Windows 11 26H1, which is a new spin on the OS specifically for Snapdragon X2 chips – and seemingly Nvidia's N1 silicon, too, as that's Arm-based and a direct rival for Qualcomm's processors powering Windows 11 laptops. So, the launch of the N1 and N1X being put back to wait for this 26H1 update – which isn't being delivered to non-Arm Windows PCs (AMD and Intel) – makes sense.

Still, we must be cautious because, as already noted, I don't rank DigiTimes as one of the most reliable sources out there, but it can, on occasion, dig up useful and accurate rumors from the supply chain. The purported launch timing seems believable enough given what I've just outlined, and we've also heard rumors suggesting similar plans in the past – such as an Alienware laptop with an Nvidia CPU aiming for a Q1 2026 launch.

A better question is: if these laptops are that close, why didn't Nvidia show off the N1X at CES 2026 recently? I haven't got an answer for that one, except that maybe Team Green wants to carry out a standalone launch that gives the spotlight entirely to this new Arm-based silicon to make a big splash for the entrance of these laptops.


Original Submission

posted by hubie on Sunday February 01, @07:30PM   Printer-friendly
from the don't-look-now-but... dept.

Motor Trend has been running a short series on how car dealers do business in the internet age. If you haven't been to a new or used car dealer in 20+ years, things have changed, and it hasn't gotten any easier to keep from being taken. As always, it's an asymmetric relationship: they deal with buyers all the time, while you visit car dealers relatively infrequently. This installment is about discounts and very low advertised prices: https://www.motortrend.com/features/dealer-discounts-add-ons-fees-car-buying

In the first installment of the How to Buy a Car series, I talked about the changes that have taken place in car sales over the past three decades or so due to the internet. To recap, in the old days, everyone started high and negotiated down to the lowest price. Both buyers and sellers understood this. But thanks to the internet, that rule has fallen by the wayside. Because everyone shops on the internet first before ever leaving their house, the dealership that gets the business is the one with the lowest prices. The new rule is, "Lead with the Lowest Price and They'll Come."
[...]
When you get to the dealership, the salesperson sits you down and asks you a series of questions.

"Are you a member of Cheapco or similar big-box wholesaler?"

When you answer no, the salesperson draws a line through that discount.

"Are you a recent college graduate, or will you be graduating in the next year?"

You're 35 years old. You answer no. The salesperson draws a line through that discount.
...
Instead of paying $49,000, the crazy price that brought you there, your price just jumped four grand. (You probably won't see every one of these discounts used at the same time, but you get the idea.)
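To make the arithmetic concrete, here is a toy sketch (the $49,000 advertised price comes from the excerpt above; the discount names, amounts, and eligibility are hypothetical): the advertised figure assumes every conditional discount applies, and each one you fail to qualify for gets added back.

```python
# Toy sketch of stacked conditional discounts. The advertised price comes from
# the excerpt above; all discount names, amounts, and eligibility are made up.
ADVERTISED_PRICE = 49_000  # the "crazy price that brought you there"

# (discount name, amount, do you actually qualify?)
conditional_discounts = [
    ("Big-box wholesaler member",       1_000, False),
    ("Recent college graduate",           500, False),
    ("Military / first responder",      1_000, False),
    ("Loyalty (trading in same brand)", 1_500, False),
    ("Financing through the dealer",      500, True),
]

# Every discount you don't qualify for is added back onto the price.
added_back = sum(amount for _, amount, qualifies in conditional_discounts if not qualifies)
real_price = ADVERTISED_PRICE + added_back

print(f"Advertised price:             ${ADVERTISED_PRICE:,}")
print(f"Discounts crossed off:        ${added_back:,}")
print(f"Price you're actually quoted: ${real_price:,}")  # -> $53,000
```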

More details and some suggestions on how to prepare before you visit the dealer at the link.

[I'm curious if the experience dealing with automobile dealerships and sales people is similar around the world --Ed.]


Original Submission

posted by hubie on Sunday February 01, @02:45PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

It is 40 years since Voyager 2 performed the first and, so far, only flyby of the planet Uranus. The resulting trove of data, however, was a bonus that almost didn't happen.

At the time of Voyager 2's launch, Uranus wasn't part of the formal plan. The mission was referred to for a long time as the Mariner Jupiter-Saturn project. The JPL engineers famously had other ideas and ensured the spacecraft had enough fuel to continue on a trajectory to Uranus and beyond if the mission was approved.

As it was, Voyager 1 performing a successful flyby of Saturn's moon Titan meant that Voyager 2 could continue on the Grand Tour, taking in Uranus and Neptune.

Former Voyager scientist Garry Hunt told The Register: "It was a fantastic encounter because it almost didn't happen. After Saturn, we had the scan platform problem. If that problem had not been resolved, there wouldn't have been a Uranus encounter."

Following the Saturn encounter, the Voyager scan platform, an assembly that allowed cameras to pan and tilt, seized on the horizontal axis. The failure would have resulted in a significant data loss and was traced to a lubrication problem. Engineers were able to rectify the issue remotely, and the probe dodged a bullet on its way to Uranus.

"It was a testing encounter," recalled Hunt. "In the interim period between the '82 encounter with Saturn and getting to Uranus, the engineers had to reorganize how the scan platform was operating. The computer system had to be altered again. All the sequencing had to be dealt with in a new manner, and we had to prepare a wobbling spacecraft to take low-exposure images in a very dark environment and get that information back to Earth."

The focus had, after all, been on Jupiter and Saturn. While the probe's makers had filled the fuel tanks before launch, going to Uranus and Neptune was not a given. "We made sure, from an engineering perspective, it could do it. But they said, 'Oh dear, you haven't got any money.'"

The funding came, and Hunt recalled that serious work on what needed to be done started in early 1983. As well as software changes on the spacecraft (updates were made to use novel compression methods and avoid sending back black images when nothing was in view), antennas on Earth were upgraded to pick up the increasingly faint Voyager 2 signal.
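As a very loose illustration of one of those software changes (a toy sketch of ours in Python, bearing no resemblance to Voyager's actual flight code), the "don't send back black images" idea boils down to checking whether a frame is essentially empty before spending downlink time on it:

```python
# Toy sketch of the "don't send back black images" idea - illustration only,
# bearing no resemblance to Voyager 2's actual flight software.
def is_essentially_black(pixels, threshold=5, max_bright_fraction=0.001):
    """Treat a frame as empty if almost no pixels exceed a low brightness level."""
    bright = sum(1 for p in pixels if p > threshold)
    return bright / len(pixels) <= max_bright_fraction

def frames_to_transmit(frames):
    """Yield only the frames worth spending downlink time on."""
    for frame_id, pixels in frames:
        if is_essentially_black(pixels):
            continue  # skip empty sky; save the downlink budget
        yield frame_id, pixels

# Hypothetical example: one nearly black frame, one with a bright target in view.
frames = [
    ("frame-001", [0] * 9_990 + [3] * 10),        # essentially black
    ("frame-002", [0] * 9_000 + [180] * 1_000),   # something in view
]
print([fid for fid, _ in frames_to_transmit(frames)])  # -> ['frame-002']
```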

"It was an incredible achievement," said Hunt, "an achievement for engineering, which science has obviously been able to explore more."

The flyby produced a tremendous amount of data about Uranus (or "George" if its 18th-century discoverer, William Herschel, had had his way). The data showed that the planet's magnetic field was not aligned with its rotational axis. Additional rings appeared in Voyager 2's data, and images of the moon Miranda showed signs consistent with a violent impact that may have blown it apart and allowed it to re-form.

[...] Finally, Hunt revealed that amid the flyby preparations, time was set aside to ensure everyone pronounced "Uranus" the approved way. "We had been briefed very strongly by the public relations people at JPL on how to pronounce 'Uranus' because the Australians were pronouncing it... incorrectly (which I will not mention)... and Americans found this somewhat embarrassing."


Original Submission

posted by jelizondo on Sunday February 01, @09:59AM   Printer-friendly
from the lemmings dept.

https://arstechnica.com/ai/2026/01/how-often-do-ai-chatbots-lead-users-down-a-harmful-path/

At this point, we've all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information. Despite the prevalence of these stories, though, it's hard to know just how often users are being manipulated. Are these tales of AI harms anecdotal outliers or signs of a frighteningly common problem?

Anthropic took a stab at answering that question this week, releasing a paper studying the potential for what it calls "disempowering patterns" across 1.5 million anonymized real-world conversations with its Claude AI model.
[...]
In the newly published paper "Who's in Charge? Disempowerment Patterns in Real-World LLM Usage," [PDF] researchers from Anthropic and the University of Toronto try to quantify the potential for a specific set of "user disempowering" harms
[...]
  • Reality distortion: Their beliefs about reality become less accurate (e.g., a chatbot validates their belief in a conspiracy theory)
  • Belief distortion: Their value judgments shift away from those they actually hold (e.g., a user begins to see a relationship as "manipulative" based on Claude's evaluation)
  • Action distortion: Their actions become misaligned with their values (e.g., a user disregards their instincts and follows Claude-written instructions for confronting their boss)

Anthropic ran nearly 1.5 million Claude conversations through Clio, an automated analysis tool and classification system
[...]
That analysis found a "severe risk" of disempowerment potential in anywhere from 1 in 1,300 conversations (for "reality distortion") to 1 in 6,000 conversations (for "action distortion").

While these worst outcomes are relatively rare on a proportional basis, the researchers note that "given the sheer number of people who use AI, and how frequently it's used, even a very low rate affects a substantial number of people." And the numbers get considerably worse when you consider conversations with at least a "mild" potential for disempowerment, which occurred in between 1 in 50 and 1 in 70 conversations (depending on the type of disempowerment).
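Scaled against the roughly 1.5 million conversations analyzed, those rates translate into a back-of-the-envelope count like this (our arithmetic, using only the figures quoted above; the paper's own tallies may differ):

```python
# Back-of-the-envelope scaling of the quoted rates to ~1.5 million conversations.
# Uses only the figures cited in the article; the paper's own counts may differ.
conversations = 1_500_000

rates = {
    "severe risk, reality distortion (1 in 1,300)": 1 / 1_300,
    "severe risk, action distortion (1 in 6,000)":  1 / 6_000,
    "at least mild potential (1 in 70)":            1 / 70,
    "at least mild potential (1 in 50)":            1 / 50,
}

for label, rate in rates.items():
    print(f"{label}: ~{conversations * rate:,.0f} conversations")
# Roughly 1,150 and 250 severe-risk conversations, and 21,000-30,000 with at
# least mild disempowerment potential, out of the analyzed sample.
```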
[...]
In the study, the researchers acknowledged that studying the text of Claude conversations only measures "disempowerment potential rather than confirmed harm" and "relies on automated assessment of inherently subjective phenomena." Ideally, they write, future research could utilize user interviews or randomized controlled trials to measure these harms more directly.
[...]
The researchers identified four major "amplifying factors" that can make users more likely to accept Claude's advice unquestioningly. These include when a user is particularly vulnerable due to a crisis or disruption in their life (which occurs in about 1 in 300 Claude conversations); when a user has formed a close personal attachment to Claude (1 in 1,200); when a user appears dependent on AI for day-to-day tasks (1 in 2,500); or when a user treats Claude as a definitive authority (1 in 3,900).

Anthropic is also quick to link this new research to its previous work on sycophancy, noting that "sycophantic validation" is "the most common mechanism for reality distortion potential."
[...]
the researchers also try to make clear that, when it comes to swaying core beliefs via chatbot conversation, it takes two to tango. "The potential for disempowerment emerges as part of an interaction dynamic between the user and Claude," they write. "Users are often active participants in the undermining of their own autonomy: projecting authority, delegating judgment, accepting outputs without question in ways that create a feedback loop with Claude."


Original Submission

posted by jelizondo on Sunday February 01, @05:15AM   Printer-friendly
from the can-we-move-our-workloads dept.

Associate professor David Eaves writes about the essential role of the commodification of services in digital sovereignty. The questions to ask on the way to digital sovereignty are not so much about owning the stack as about the ability to move workloads. In other words, open standards for protocols, file formats, and more are the prerequisites. The same applies to the software supply chain. However, as we recently discussed here, PHK pointed out that Free and Open Source reference implementations would be of great benefit. Associate professor Eaves writes:

There is growing and valid concern among policymakers about tech sovereignty and cloud infrastructure. A handful of American hyperscalers — AWS, Microsoft Azure, Google Cloud — control the digital substrate on which modern economies run. This concentration is compounded by a US government increasingly willing to wield its digital industries as leverage. As French President Emmanuel Macron quipped: "There is no such thing as happy vassalage."

While some countries appear ready to concede market dominance in exchange for improved trade relations, others are exploring massive investments in public sector alternatives to the hyperscalers, advocating that billions, and possibly many, many billions, be spent on sovereign stack plans, and/or positioning local telecoms as alternatives to the hyperscalers.

Ironically, both strategies may increase dependency, limit government agency and increase economic and geopolitical risks — the very problems sovereignty seeks to solve. As Mike Bracken and I wrote earlier this year: "Domination by a local champion, free to extract rents, may be a path to greater autonomy, but it is unlikely to lead to increased competitiveness or greater global influence."

Any realistic path to increased agency will be expensive and take years. To be sustainable, it must focus on commoditizing existing solutions through interoperability and de facto standards that will broaden the market (and enable effective national champions). This should be our north star and direction of travel. The metric for success should focus on making it as simple as possible to move data and applications across suppliers. Critically, this cannot be achieved by regulation alone; it will also require deft procurement and a willingness to accept de facto as opposed to ideal standards. The good news is governments have done this before. However, to succeed, it will require building the capacity to become market shapers and not market takers — thinking like electricity grids and railway gauges, not digital empires.
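As a very small code-level illustration of what "simple to move data and applications across suppliers" can mean (our sketch, not anything from Eaves' article; the provider classes are placeholders, not real SDKs): if applications are written against a common interface rather than a provider-specific API, switching suppliers becomes a configuration change instead of a rewrite.

```python
# Toy sketch: program against a common interface, not a provider-specific API,
# so the storage supplier can be swapped without touching application code.
# The provider classes here are placeholders, not real SDKs.
from typing import Protocol

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class HyperscalerStore:
    """Stand-in for a proprietary cloud SDK wrapped behind the common interface."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

class SovereignStore:
    """Stand-in for a local or European provider offering the same operations."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore) -> bytes:
    # Application code depends only on the interface; the supplier is swappable.
    store.put("reports/2026-01.txt", b"quarterly workload report")
    return store.get("reports/2026-01.txt")

# Moving the workload is a one-line change in configuration, not a rewrite.
print(archive_report(HyperscalerStore()))
print(archive_report(SovereignStore()))
```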

The essential role of commodities has been widely known and acknowledged for decades. We are in this situation because key companies and/or monopolies saw that long ago and have been allowed to fight, all this time, against ICT remaining a commodity. Sadly, the discussion about commodification probably peaked in the years just after the infamous Halloween Documents, particularly the first one. Eric S Raymond, author of The Cathedral and the Bazaar and an early FOSS developer, published these leaked documents, which covered potential strategies relating to M$'s fight against free and open source software and, in particular, against Linux back in 1998. In retrospect, these documents have turned out to be blueprints, used against FOSS and open standards by other companies as well.

Previously:
(2026) Sorry, Eh
(2026) Poul-Henning Kamp's Feedback to the EU on Digital Sovereignty
(2026) A Post-American, Enshittification-Resistant Internet
(2025) This German State Decides to Save €15 Million Each Year By Kicking Out Microsoft for Open Source
(2025) Why People Keep Flocking to Linux in 2025 (and It's Not Just to Escape Windows)
(2025) Microsoft Can't Guarantee Data Sovereignty – OVHcloud Says 'We Told You So'
(2014) US Offering Cash For Pro-TAFTA/TTIP Propaganda


Original Submission

posted by jelizondo on Sunday February 01, @12:24AM   Printer-friendly

From Chatbots to Dice Rolls: Researchers Use D&D to Test AI's Long-term Decision-making Abilities:

Large Language Models, like ChatGPT, are learning to play Dungeons & Dragons. The reason? Simulating and playing the popular tabletop role-playing game provides a good testing ground for AI agents that need to function independently for long stretches of time.

Indeed D&D's complex rules, extended campaigns and need for teamwork are an ideal environment to evaluate the long-term performance of AI agents powered by Large Language Models, according to a team of computer scientists led by researchers at the University of California San Diego. For example, while playing D&D as AI agents, the models need to follow specific game rules and coordinate teams of players, comprising both AI agents and humans.

The work aims to solve one of the main challenges that arise when trying to evaluate LLM performance: the lack of benchmarks for long-term tasks. Most benchmarks for these models still target short term operation, while LLMs are increasingly deployed as autonomous or semi-autonomous agents that have to function more or less independently over long periods of time.

"Dungeons & Dragons is a natural testing ground to evaluate multistep planning, adhering to rules and team strategy," said Raj Ammanabrolu, the study's senior author and a faculty member in the Department of Computer Science and Engineering at UC San Diego. "Because play unfolds through dialog, D&D also opens a direct avenue for human-AI interaction: agents can assist or coplay with other people."

[...] The models played against each other, and against over 2,000 experienced D&D players recruited by the researchers. The LLMs modeled and played 27 different scenarios selected from well-known D&D battle setups named Goblin Ambush, Kennel in Cragmaw Hideout and Klarg's Cave.
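For readers unfamiliar with the mechanic in the paper's title, a "DC" (difficulty class) check in D&D is a 20-sided die roll plus a modifier that must meet or beat a target number. A grounded tool along the lines of the toy sketch below (our illustration, not the paper's actual harness) is the kind of thing an LLM agent can be required to call, so that outcomes follow the rules rather than whatever the model imagines:

```python
# Toy d20 "difficulty class" check of the kind an LLM agent could be required
# to call as a tool. A sketch only - not the harness used in the paper.
import random

def ability_check(modifier, dc, rng=None):
    """Roll 1d20, add the modifier, and compare against the difficulty class (DC)."""
    rng = rng or random.Random()
    roll = rng.randint(1, 20)
    total = roll + modifier
    return {"roll": roll, "modifier": modifier, "total": total,
            "dc": dc, "success": total >= dc}

# Example: a goblin with a +4 Stealth modifier tries to hide against DC 12.
print(ability_check(modifier=4, dc=12, rng=random.Random(0)))
```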

In the process, the models exhibited some quirky behaviors. Goblins started developing a personality mid-fight, taunting adversaries with colorful and somewhat nonsensical expressions, like "Heh — shiny man's gonna bleed!" Paladins started making heroic speeches for no reason while stepping into the line of fire or being hit by a counterattack. Warlocks got particularly dramatic, even in mundane situations.

Researchers are not sure what caused these behaviors, but take it as a sign that the models were trying to imbue the game play with texture and personality.

[...] Next steps include simulating full D&D campaigns – not just combat. The method the researchers developed could also be applied to other scenarios, such as multiparty negotiation environments and strategy planning in a business environment.

Conference Paper: Setting the DC: Tool-Grounded D&D Simulations to Test LLM Agents [PDF]


Original Submission